This is a false dichotomy, and each approach on its own is unsustainable, especially in programming.

Proponents of top-down design usually also subscribe to C++-style or Java-style "OOP": a program must model the relationships and hierarchies of objects encapsulating data, using inheritance and abstraction. Compartmentalising data into individual units that can perform their own tasks makes it easier to conceptualise and direct large systems, which is required to solve certain problems. However, it also leads to very inefficient code, as the actual work being done has to adapt to the way the data is structured, not to the way a computer could perform it best. Merely eliminating abstractions and hierarchies and decompartmentalising data can yield speedups of multiple orders of magnitude, which is the main point of the bottom-up approach. Bottom-up programs, however, are harder to change, as their data structures revolve around the initial goal and algorithm. This makes large bottom-up projects unsustainable: technical debt creeps in and spirals out of control as incomplete changes are made to large code bases.

Yet when observing nature, it is evident that a bottom-up view of phenomena is just as valid as a top-down view (when looking at organisms, for example). Herein lies the false dichotomy of top-down vs. bottom-up paradigms in programming: the tools (i.e., programming languages) we have today are simply incapable of combining top-down descriptions with bottom-up behaviour, but nothing necessitates this limitation. We want, and we could have, the performance of bottom-up design together with the intuitive system descriptions of top-down design.

Jonathan Blow's jai language and its SOA (structure of arrays) vs. AOS (array of structures) support is just one example of how parts of the dichotomy can be solved intuitively, in this case by decompartmentalising/destructuring. Consider a memory pool / arena:

```
[T: TYPE] ArenaItem {
    Present: BOOL;
    Item: T;
}

Arena: [U64]ArenaItem[1001];
```

Most languages would organise the arena as `{ {Present, Item}, {Present, Item}, ... }`. This layout is wasteful, because most languages size-align each primitive member and pad the struct to a multiple of the alignment of its largest primitive member. That means the type `[U64]ArenaItem` would be laid out as `{ Present: BOOL; BYTE[7]; Item: U64; }`, using only 9 out of 16 bytes. Using SOA destructuring, `Arena` is converted to the following:

```
[T: TYPE; N: NUMBER] ArenaItemSOA {
    Present: BOOL[N];
    Item: T[N];
}

Arena: [U64; 1001]ArenaItemSOA;
```

It is now laid out in memory as `{ Present: BOOL[1001]; BYTE[7]; Item: U64[1001]; }`. Instead of wasting 7 padding bytes per item, we waste only 7 bytes in total (9016 bytes for the whole arena instead of 16016). Additionally, when iterating over the arena to find all entries with the `Present` flag set, we now read a very dense stream of meaningful data, whereas looping over the AOS version yields only one meaningful byte per 16 bytes. That squanders memory bandwidth: processors fetch data in cache lines of typically 64 bytes, and most of what is fetched is simply ignored, resulting in slow iteration and lots of stalling. With SOA, we can intuitively describe coupled data in a way that costs us no performance, as the sketch below shows.
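To make the layout difference tangible, here is a minimal sketch of what the two arrangements amount to, written in C++ for concreteness and assuming a typical 64-bit ABI; the names `ArenaItemAOS`, `ArenaSOA` and `count_present` are invented for illustration:

```
#include <cstddef>
#include <cstdint>

constexpr std::size_t N = 1001;

// AOS: one struct per slot. On a typical 64-bit ABI, the bool forces
// 7 padding bytes so that `item` is 8-byte aligned.
struct ArenaItemAOS {
    bool          present;  // 1 byte, then 7 bytes of padding
    std::uint64_t item;     // 8 bytes
};
static_assert(sizeof(ArenaItemAOS) == 16, "9 useful bytes, 7 wasted per slot");

ArenaItemAOS arena_aos[N];  // 16016 bytes, 7007 of them padding

// SOA: one array per field; padding occurs once, between the two arrays.
struct ArenaSOA {
    bool          present[N];  // dense run of 1001 flags
                               // (7 padding bytes here, once, to align `item`)
    std::uint64_t item[N];     // dense run of 1001 payloads
};

ArenaSOA arena_soa;  // 9016 bytes in total

// Scanning the flags now touches only meaningful bytes: one 64-byte cache
// line holds 64 flags, instead of 4 (one per 16-byte slot) in the AOS case.
std::size_t count_present(const ArenaSOA& a) {
    std::size_t n = 0;
    for (std::size_t i = 0; i < N; ++i)
        n += a.present[i];
    return n;
}
```

The point of jai's SOA support is that the compiler derives the second layout from the first declaration automatically, while the member access syntax stays the same; in C++, the destructured version has to be written and maintained by hand.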
But this is just the beginning. The next step is to tackle the next problem that causes extreme slow-downs when programming top-down: polymorphic abstraction. This problem is twofold:

1. Polymorphic abstract types can have multiple differently-sized concretisations, so an instance of an abstract type has no fixed size. This prevents tight packing in arrays (the element size has to be constant) and forces dynamic allocation of individual elements. Iterating over an array of abstract elements means iterating over an array of pointers (pointer chasing), which is bad for cache performance, as the accesses are most likely fragmented.

2. The abstraction hides the details of the concretising object unless we perform a check (black-box abstraction). Every place that wants to handle the object efficiently has to determine its kind manually.

On top of that, an abstract type, and by extension its concretising types as well, carries the size overhead of storing a reference to a function table, which additionally incurs a performance penalty on every invocation of an abstract operation. And abstract objects require dynamic allocation in almost all cases, another expensive overhead.

However, both problems are easily solved: ditch the virtual table reference from the abstract type, and create an implicit tagged union type for the abstract base type, containing all concretising types and a short type identifier (a single byte usually suffices). Whenever an instance of the abstract type is referred to, the tagged union type is used implicitly. This has the benefit of being SOA-compatible, with one array of type tags and one array of union storages. It can even be enhanced further by splitting it into a struct containing one array per concretising type, leading to data coherence and even better packing, at the overhead of managing more arrays (one per concretising type). A sketch of this unionisation follows below.

The drawback is that a pointer to an abstract type is now actually a pair `{TypeTag, VOID *}`, where the type tag helps interpret the object. In practice, however, it is extremely rare for pointers to objects to be required in large numbers, unless poor polymorphism support necessitates it. Even then, the type tag is no larger than the virtual function table pointer, so this could only be a drawback if many pointers pointed to the same instance. And even that can be alleviated by using SOA to destructure the type containing the pointers, splitting the pointers into two arrays, `{TypeTag[]; VOID*[];}`, which again leads to much more compact storage. Even if this, too, spreads the data out, it is in no way worse than having every instance dynamically allocated individually.

This solves the storage problem of polymorphic abstraction. The remaining problem is the run-time overhead. Employing compile-time generics, we can duplicate each function that operates on a generic value for every concretising type (this is only possible to a certain degree, of course), so that a caller overloads the function with the concrete type it passes in. This makes all previously abstract operations within the function concrete, eliminating virtual function table lookups. We can even go one step further and let abstract return values "overload" the caller, by implicitly branching its execution to select the concrete path depending on the value's type; this makes subsequent, otherwise abstract operations concrete as well (see the second sketch below). Of course, this solution to abstraction is tricky, as it results in infinitely sized code for recursive programs, but that is an inherent limitation of compile-time generics.
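To make the unionisation concrete, here is a rough C++ sketch of the representation a compiler could generate implicitly for an abstract type with two concretisations; the `Shape`, `Circle` and `Rect` types and the `area` operation are invented for illustration:

```
#include <cstddef>
#include <cstdint>

// Hypothetical concretising types of an abstract "shape".
struct Circle { float radius; };
struct Rect   { float w, h; };

// Instead of a vtable-based base class: a one-byte tag plus a union of all
// concretising types. Instances now have a fixed size and pack tightly.
enum class ShapeTag : std::uint8_t { Circle, Rect };

struct Shape {
    ShapeTag tag;
    union {
        Circle circle;
        Rect   rect;
    };
};

// SOA-compatible: tags and payloads can live in separate, dense arrays.
constexpr std::size_t N = 1024;
struct ShapesSOA {
    ShapeTag tag[N];
    union { Circle circle; Rect rect; } payload[N];
};

float area(const Shape& s) {
    // The kind check replaces the virtual call: no vtable, no indirection.
    switch (s.tag) {
        case ShapeTag::Circle: return 3.14159265f * s.circle.radius * s.circle.radius;
        case ShapeTag::Rect:   return s.rect.w * s.rect.h;
    }
    return 0.0f;  // unreachable with a valid tag
}
```

Note the per-instance cost: a one-byte tag (plus padding) instead of an 8-byte vtable pointer and a heap allocation. Splitting `ShapesSOA` further into one array per concretising type would also eliminate the unused union bytes, as described above.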
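And a second sketch, of the concretisation step, using C++ templates as a stand-in for the compile-time generics described above, and `std::variant`/`std::visit` to emulate the implicit branch on an abstract return value (again, all names are invented):

```
#include <variant>

// Same hypothetical types as in the previous sketch.
struct Circle { float radius; };
struct Rect   { float w, h; };

// Concrete implementations of an operation that would otherwise be virtual.
float area(const Circle& s) { return 3.14159265f * s.radius * s.radius; }
float area(const Rect& s)   { return s.w * s.h; }

// Compile-time generics: one copy of this function is emitted per concrete
// type it is called with, so `area(s)` resolves to a direct, inlinable call.
template <typename S>
float scaled_area(const S& s, float factor) {
    return factor * area(s);
}

using AnyShape = std::variant<Circle, Rect>;

// A function with an "abstract" return value: the concrete type of the
// result is only known at run time.
AnyShape make_shape(bool round) {
    if (round) return Circle{1.0f};
    return Rect{2.0f, 3.0f};
}

float demo(bool round) {
    // std::visit branches once on the type tag; each branch instantiates
    // the generic lambda for exactly one concrete type, so everything
    // inside the branch is concrete: the caller has been "overloaded".
    return std::visit([](const auto& s) { return scaled_area(s, 2.0f); },
                      make_shape(round));
}
```

Every instantiation of `scaled_area` and every `std::visit` branch duplicates code, which is exactly the code-size trade-off noted above.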
Thus, just like with function inlining, the compiler has to decide where it seems expedient to concretise and where to stay abstract. Still, with three simple techniques (SOA, polymorphic unionisation, and abstract concretisation), all of which can be performed by the compiler without requiring much direction from the programmer, we can have most of the performance of hand-tailored bottom-up code, while keeping almost all of the maintainability and legibility of top-down code.